
    A Multiple Indicators Model for Volatility Using Intra-Daily Data

    Many ways exist to measure and model financial asset volatility. In principle, as the frequency of the data increases, the quality of forecasts should improve. Yet, there is no consensus about a 'true' or 'best' measure of volatility. In this paper we propose to jointly consider absolute daily returns, daily high-low range and daily realized volatility to develop a forecasting model based on their conditional dynamics. As all are non-negative series, we develop a multiplicative error model that is consistent and asymptotically normal under a wide range of specifications for the error density function. The estimation results show significant interactions between the indicators. We also show that one-month-ahead forecasts match well (both in and out of sample) the market-based volatility measure provided by an average of implied volatilities of index options as measured by VIX.
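The conditional-dynamics idea above can be sketched with a generic MEM(1,1) simulation; the parameter values and the unit-mean Gamma innovation below are illustrative assumptions, not the paper's specification or estimates:

```python
import numpy as np

# Hypothetical sketch of a univariate Multiplicative Error Model (MEM):
# x_t = mu_t * eps_t, with eps_t > 0, E[eps_t] = 1, and a GARCH-like
# recursion for the conditional scale mu_t. All parameter values are assumed.

def simulate_mem(omega=0.1, alpha=0.3, beta=0.6, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = np.empty(n)
    mu[0] = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    for t in range(n):
        if t > 0:
            mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
        # unit-mean Gamma innovation keeps the series non-negative
        eps = rng.gamma(shape=4.0, scale=0.25)
        x[t] = mu[t] * eps
    return x, mu

x, mu = simulate_mem()
print(x.mean(), mu.mean())
```

The same recursion applies to any of the non-negative indicators mentioned (absolute returns, range, realized volatility); the paper's multiple-indicator model couples several such equations.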

    Vector Multiplicative Error Models: Representation and Inference

    The Multiplicative Error Model introduced by Engle (2002) for positive valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, by taking into consideration the possibility that the vector innovation process be contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation by equation procedure.
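One standard way to build contemporaneously correlated positive innovations is a Gaussian copula with unit-mean Gamma marginals; the sketch below is only illustrative of that construction (the correlation value, marginal shapes and function names are assumptions, not the paper's estimating-equations procedure):

```python
import numpy as np
from scipy import stats

# Illustrative sketch: correlated, unit-mean positive innovations for a
# bivariate vector-MEM via a Gaussian copula with Gamma marginals.

def correlated_gamma_innovations(n, rho=0.5, shape=4.0, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)  # copula draws
    u = stats.norm.cdf(z)                                  # map to uniforms
    # Gamma quantile transform; scale = 1/shape gives unit-mean marginals
    eps = stats.gamma.ppf(u, a=shape, scale=1.0 / shape)
    return eps

eps = correlated_gamma_innovations(5000)
print(eps.mean(axis=0), np.corrcoef(eps.T)[0, 1])
```

Note that the copula correlation is not exactly the correlation of the transformed marginals, which is one reason the paper estimates the dependence parameters jointly rather than fixing them.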

    Flexible Tool for Model Building: the Relevant Transformation of the Inputs Network Approach (RETINA)

    A new method, called Relevant Transformation of the Inputs Network Approach (RETINA), is proposed as a tool for model building. It is designed around flexibility (with nonlinear transformations of the predictors of interest), selective search within the range of possible models, out-of-sample forecasting ability and computational simplicity. In tests on simulated data, it shows both a high rate of successful retrieval of the DGP, which increases with the sample size, and good performance relative to alternative procedures. A telephone service demand model is built to show how the procedure applies to real data.

    Copycats and Common Swings: The Impact of the Use of Forecasts in Information Sets

    This paper presents evidence, using data from Consensus Forecasts, that there is an "attraction" to conform to the mean forecast; in other words, views expressed by other forecasters in the previous period influence individuals' current forecasts. The paper then discusses--and provides further evidence on--two important implications of this finding. The first is that the forecasting performance of these groups may be severely affected by the detected imitation behavior and lead to convergence to a value that is not the "right" target. Second, since the forecasts are not independent, the common practice of using the standard deviation from the forecasts' distribution, as if they were standard errors of the estimated mean, is not warranted. Copyright 2002, International Monetary Fund
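The attraction mechanism can be illustrated with a toy simulation in which each forecaster mixes a private signal with last period's consensus; the weight lam and all other values are hypothetical, not estimates from the paper:

```python
import numpy as np

# Toy "attraction to the mean forecast": each period, a forecaster's view is
# a convex combination of last period's consensus and her own noisy signal.

def simulate_panel(n_forecasters=20, n_periods=12, target=2.0, lam=0.6, seed=1):
    rng = np.random.default_rng(seed)
    f = target + rng.normal(0.0, 1.0, n_forecasters)  # dispersed initial views
    spread = [f.std()]
    for _ in range(n_periods):
        own_signal = target + rng.normal(0.0, 1.0, n_forecasters)
        f = lam * f.mean() + (1 - lam) * own_signal    # imitation term
        spread.append(f.std())
    return spread

spread = simulate_panel()
print(spread[0], spread[-1])
```

Because imitation shrinks the cross-sectional dispersion below that of independent views, treating that dispersion as a standard error of the consensus understates forecast uncertainty, which is exactly the second implication above.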

    A Flexible Tool for Model Building: the Relevant Transformation of the Inputs Network Approach (RETINA)

    A new method, called the relevant transformation of the inputs network approach (RETINA), is proposed as a tool for model building and selection. It is designed to improve some of the shortcomings of neural networks. It has the flexibility of neural network models, the concavity of the likelihood in the weights of the usual likelihood models, and the ability to identify a parsimonious set of attributes that are likely to be relevant for predicting out of sample outcomes. RETINA expands the range of models by considering transformations of the original inputs; it splits the sample into three disjoint subsamples, sorts the candidate regressors by a saliency feature, chooses the models in subsample 1, uses subsample 2 for parameter estimation and subsample 3 for cross-validation. It is modular, can be used as a data exploratory tool and is computationally feasible on personal computers. In tests on simulated data, it achieves high success rates when the sample size or the R2 is large enough. As our experiments show, it is superior to alternative procedures such as the non-negative garrote and forward and backward stepwise regression.
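The three-subsample logic can be sketched schematically; here saliency is proxied by the absolute correlation with the outcome, and the candidate-transformation step of the real procedure is omitted, so this is a simplified assumption-laden illustration rather than RETINA itself:

```python
import numpy as np

# Schematic three-subsample selection: rank candidate regressors on
# subsample 1, estimate parameters on subsample 2, validate on subsample 3.

def retina_like_selection(X, y, k=3):
    n = len(y)
    i1, i2 = n // 3, 2 * n // 3
    X1, y1 = X[:i1], y[:i1]
    X2, y2 = X[i1:i2], y[i1:i2]
    X3, y3 = X[i2:], y[i2:]
    # saliency proxy: absolute correlation with y on subsample 1
    sal = np.abs([np.corrcoef(X1[:, j], y1)[0, 1] for j in range(X.shape[1])])
    chosen = np.argsort(sal)[::-1][:k]
    # OLS on subsample 2, out-of-sample MSE on subsample 3
    beta, *_ = np.linalg.lstsq(X2[:, chosen], y2, rcond=None)
    mse = np.mean((y3 - X3[:, chosen] @ beta) ** 2)
    return chosen, mse

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=300)
chosen, mse = retina_like_selection(X, y)
print(chosen, mse)
```

Holding estimation and validation data apart from the selection data is what guards against the overfitting that plagues stepwise search on a single sample.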

    The impact of the use of forecasts in information sets

    We analyze the properties of multiperiod forecasts which are formulated by a number of companies for a fixed horizon ahead, which moves one period closer each month, and are collected and diffused each month by some polling agency. Some descriptive evidence and a formal model suggest that knowing the views expressed by other forecasters in the previous period influences individual current forecasts, in the form of an attraction to conform to the mean forecast. There are two implications: one is that the forecasts polled in a multiperiod framework cannot be seen as independent from one another, and hence the practice of using standard deviations from the forecasts' distribution as if they were standard errors of the estimated mean is not warranted. The second is that the forecasting performance of these groups may be severely affected by the detected imitation behavior and lead to convergence to a value which is not the right target (either the first available figure or some final values available at a later time). Keywords: multistep forecast, consensus forecast, preliminary data


    Mixed-frequency quantile regression with realized volatility to forecast Value-at-Risk

    The use of quantile regression to calculate risk measures has been widely recognized in the financial econometrics literature. When data are observed at mixed frequencies, the standard quantile regression models are no longer adequate. In this paper, we develop a model built on a mixed-frequency quantile regression to directly estimate the Value-at-Risk. In particular, the low-frequency component incorporates information coming from variables observed at, typically, monthly or lower frequencies, while the high-frequency component can include a variety of daily variables, like realized volatility measures or market indices. We derive the conditions for the weak stationarity of the suggested daily return process, while the finite sample properties are investigated in an extensive Monte Carlo exercise. The validity of the proposed model is then explored through a real data application using the most important financial indexes. We show that our model outperforms other competing specifications, using backtesting and Model Confidence Set procedures.
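Direct VaR estimation by quantile regression amounts to minimizing the pinball (check) loss; the sketch below simplifies the paper's mixed-frequency weighting to a single daily regressor standing in for realized volatility, so the model, names and data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Pinball (check) loss: the quantity whose expectation is minimized by the
# tau-quantile of the forecast error distribution.
def pinball(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1) * u)

# Direct VaR: fit VaR_t = beta0 + beta1 * rv_t at coverage tau by
# minimizing the average pinball loss (derivative-free, since the loss
# has kinks).
def fit_var_quantile(returns, rv, tau=0.05):
    X = np.column_stack([np.ones_like(rv), rv])
    loss = lambda beta: pinball(returns - X @ beta, tau).mean()
    res = minimize(loss, x0=np.zeros(2), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(0)
sigma = 0.5 + rng.uniform(0.0, 1.0, size=2000)   # stand-in for realized vol
returns = sigma * rng.normal(size=2000)
beta = fit_var_quantile(returns, sigma)
var_forecast = beta[0] + beta[1] * sigma
print(beta, np.mean(returns < var_forecast))
```

In the paper, the single regressor is replaced by separate low- and high-frequency components, but the estimation target is the same conditional quantile.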

    Modeling and evaluating conditional quantile dynamics in VaR forecasts

    We focus on the time-varying modeling of VaR at a given coverage τ, assessing whether the quantiles of the distribution of the returns standardized by their conditional means and standard deviations exhibit predictable dynamics. Models are evaluated via simulation, determining the merits of the asymmetric Mean Absolute Deviation as a loss function to rank forecast performances. The empirical application on the Fama-French 25 value-weighted portfolios with a moving forecast window shows substantial improvements in forecasting conditional quantiles by keeping the predicted quantile unchanged unless the empirical frequency of violations falls outside a data-driven interval around τ.
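The update rule described in the abstract can be sketched as follows; the band width and window length here are assumptions, standing in for the paper's data-driven interval:

```python
import numpy as np

# Keep the current quantile forecast unchanged unless the rolling empirical
# frequency of violations drifts outside a band around the coverage tau;
# otherwise re-estimate the empirical quantile on the recent window.
def update_quantile(q_current, std_returns, tau=0.05, band=0.02, window=250):
    recent = std_returns[-window:]
    viol_freq = np.mean(recent < q_current)   # share of violations
    if abs(viol_freq - tau) <= band:
        return q_current                      # inside band: leave unchanged
    return np.quantile(recent, tau)           # outside band: re-estimate

rng = np.random.default_rng(0)
std_returns = rng.normal(size=1000)           # toy standardized returns
q = np.quantile(std_returns[-250:], 0.05)     # current tau-quantile forecast
print(update_quantile(q, std_returns))
```

Freezing the forecast while violations stay near their nominal rate avoids chasing estimation noise, which is the source of the improvement the abstract reports.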